Spin Timer Trigger Loop Exit Feature #1381
base: main
Conversation
Force-pushed from b6006ad to 609168f
This is a really interesting idea, and I see the value in it, but I have deep reservations about whether this is the right way to implement it. The fundamental issue is with maintaining the 'completed' state. Exiting the loop is fine within a single local Spin instance, but what happens when that instance restarts - for example, because spin watch restarts it, or because you need to update the app, or because the computer needs to be rebooted, or because the VM is being upgraded and the workload needs to be moved to a different VM? Because all we've done is exit the timer loop in the original instance, the new instance has no memory that the component's work is done, and starts a new timer loop for it. We are relying on transient program flow in the host rather than structured, persistent application state.
And right now I don't have a better solution. Better solutions may be hard: Spin has an unarticulated design philosophy that not only are component instances stateless, but even the runtime itself is sorta-kinda stateless (stateful within a session for sure, but between sessions, not so much) - everything should be in the manifest. This shows up particularly in the cloud where applications can be evicted and rehydrated willy-nilly. A feature like this requires a concept of application state that we just don't have at the moment.
So I am not sure how to proceed. Part of me doesn't want to turn a simple PR into a vast and foundational architectural investigation. But at the same time, I don't feel comfortable introducing this behaviour ad hoc without thinking through the architectural implications (particularly for a sample that is meant to guide others).
I hope this makes sense, and I'd be interested to hear how you and other Spin folks feel about this and how to manage it. Thanks for the PR and for all the thoughts it has sparked!
Thanks @itowlson, I don't disagree with what you alluded to above; it's a valid concern. One way of thinking about this is that we can decouple application-level state management from the Spin framework (to your point, keeping it stateless) and let the components handle state between restarts according to their business logic, using the state store options provided by Spin - e.g. the KV store on a single machine, or Redis if the app runs on multiple machines. This is how I am addressing some of these concerns in my local branch: app restarts no longer lose this state, because the state is maintained in the KV store (single-machine use case). I'd welcome your and others' thoughts on it.
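A minimal sketch of that approach, assuming the Rust `spin_sdk` key-value API and a hypothetical `handle_timer_tick` entry point whose boolean return value tells the trigger whether to keep scheduling the component (the PR's actual signalling mechanism is not shown in this thread):

```rust
use spin_sdk::key_value::Store;

// Hypothetical handler: returning Ok(true) means "schedule me again",
// Ok(false) means "my work is done, exit the timer loop".
fn handle_timer_tick() -> anyhow::Result<bool> {
    let store = Store::open_default()?;

    // The completion flag lives in the KV store, not in host program
    // flow, so a restarted instance sees it and stops immediately.
    if store.exists("completed")? {
        return Ok(false);
    }

    do_business_logic()?;

    // Persist completion so a restart does not redo the work.
    store.set("completed", b"done")?;
    Ok(false)
}

fn do_business_logic() -> anyhow::Result<()> {
    // Component-specific work goes here.
    Ok(())
}
```

The design point here is that the component owns its "done" state while the host stays stateless; how the flag is namespaced per component, and how the host interprets the return value, would be part of the design discussion above.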
Signed-off-by: Suneet Nangia <[email protected]>
Force-pushed from 3c11659 to 7f5b197
This change allows components triggered by the timer trigger plugin to exit when they choose to, effectively inverting control so that each component exits the timer loop according to its business logic. This is useful if you are composing a service that consists of multiple components (some based on pub-sub, others not) and not all components need to run in an infinite loop.
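A rough sketch of what that inversion of control could look like on the host side, assuming a tokio-based loop; `run_component` is a hypothetical stand-in for instantiating the component and invoking its timer handler:

```rust
use std::time::Duration;

// Hypothetical component invocation; the boolean is the component's
// continue/stop signal (true = keep looping, false = exit).
async fn run_component() -> anyhow::Result<bool> {
    // Instantiate the component and call its timer handler here.
    Ok(false)
}

// The host no longer loops unconditionally: the component's return
// value decides when the timer loop for this component ends.
async fn timer_loop(interval: Duration) -> anyhow::Result<()> {
    loop {
        if !run_component().await? {
            break; // component opted out of further ticks
        }
        tokio::time::sleep(interval).await;
    }
    Ok(())
}
```

As the review comment above notes, this exit is per-instance only; pairing it with persisted state (as in the KV store sketch earlier) is what would make it survive restarts.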